Solar Irradiation Forecasting using Genetic Algorithms
Gunasekaran, V., Kovi, K. K., Arja, S., Chimata, R.
Renewable energy forecasting is attaining greater importance as renewables contribute an ever-larger share of power to electrical grids. Solar energy is one of the most significant contributors to renewable energy and depends on solar irradiation. For effective management of electrical power grids, forecasting models that predict solar irradiation with high accuracy are needed. In the current study, machine learning techniques such as Linear Regression, Extreme Gradient Boosting (XGBoost) and Genetic Algorithm optimization are used to forecast solar irradiation. The data used for training and validation were recorded at three geographically distinct stations in the United States that are part of the SURFRAD network. Global Horizontal Irradiance (GHI) is predicted by the models, which are then compared. Genetic Algorithm optimization is applied to XGBoost to further improve the accuracy of solar irradiation prediction.
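The abstract pairs a genetic algorithm with XGBoost for hyperparameter search. As a rough illustration of that idea, here is a minimal GA sketch in Python; the parameter bounds, operators, and population settings are illustrative assumptions, not values from the paper, and in practice the fitness function would train an XGBoost model and return its validation error on the SURFRAD data.

```python
import random

# Hedged sketch: a minimal genetic algorithm for tuning three XGBoost-style
# hyperparameters. Bounds below are illustrative assumptions.
BOUNDS = {
    "learning_rate": (0.01, 0.3),
    "max_depth": (2, 10),
    "n_estimators": (50, 500),
}

def random_individual():
    # One candidate hyperparameter set, sampled uniformly within bounds.
    return {
        "learning_rate": random.uniform(*BOUNDS["learning_rate"]),
        "max_depth": random.randint(*BOUNDS["max_depth"]),
        "n_estimators": random.randint(*BOUNDS["n_estimators"]),
    }

def crossover(a, b):
    # Uniform crossover: each gene comes from one of the two parents.
    return {k: random.choice([a[k], b[k]]) for k in BOUNDS}

def mutate(ind, rate=0.2):
    # Resample each gene with probability `rate`, staying within bounds.
    out = dict(ind)
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:
            out[k] = random.uniform(lo, hi) if isinstance(lo, float) else random.randint(lo, hi)
    return out

def evolve(fitness, pop_size=20, generations=30):
    """Minimize `fitness` (e.g. cross-validated RMSE of an XGBoost model)."""
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]  # truncation selection keeps the best half
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return min(pop, key=fitness)
```

In the study's setting, `fitness` would fit an `xgboost.XGBRegressor` with the candidate hyperparameters and score it on held-out GHI measurements; here it is left pluggable so the search loop stands alone.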
- North America > United States > Pennsylvania (0.04)
- North America > United States > New York (0.04)
- North America > United States > Nevada (0.04)
- (7 more...)
- Energy > Renewable > Solar (1.00)
- Energy > Power Industry (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Evolutionary Systems (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
Text generators may plagiarize beyond 'copy and paste'
Students may want to think twice before using a chatbot to complete their next assignment. Language models that generate text in response to user prompts plagiarize content in more ways than one, according to a Penn State-led research team that conducted the first study to directly examine the phenomenon. "Plagiarism comes in different flavors," said Dongwon Lee, professor of information sciences and technology at Penn State. "We wanted to see if language models not only copy and paste but resort to more sophisticated forms of plagiarism without realizing it." The researchers focused on identifying three forms of plagiarism: verbatim, or directly copying and pasting content; paraphrase, or rewording and restructuring content without citing the original source; and idea, or using the main idea from a text without proper attribution.
- North America > United States > Texas > Travis County > Austin (0.05)
- North America > United States > Mississippi (0.05)
Users trust AI as much as humans for flagging problematic content
Social media users may trust artificial intelligence (AI) as much as human editors to flag hate speech and harmful content, according to researchers at Penn State. The researchers said that when users think about positive attributes of machines, like their accuracy and objectivity, they show more faith in AI. However, if users are reminded about the inability of machines to make subjective decisions, their trust is lower. The findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated while avoiding the perception that the material has been censored, or inaccurately classified, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory. "There's this dire need for content moderation on social media and more generally, online media," said Sundar, who is also an affiliate of Penn State's Institute for Computational and Data Sciences.
People who distrust fellow humans show greater trust in artificial intelligence
A person's distrust in humans predicts they will have more trust in artificial intelligence's ability to moderate content online, according to a recently published study. The findings, the researchers say, have practical implications for both designers and users of AI tools in social media. "We found a systematic pattern of individuals who have less trust in other humans showing greater trust in AI's classification," said S. Shyam Sundar, the James P. Jimirro Professor of Media Effects at Penn State. "Based on our analysis, this seems to be due to the users invoking the idea that machines are accurate, objective and free from ideological bias." The study, published in the journal New Media & Society, also found that "power users," who are experienced users of information technology, had the opposite tendency.
HCI and Designing for Democracy - Connected World
What is a democratic internet, and how does it intersect with the evolving nature of HCI? In fact, experts like Elizabeth Gerber, professor and co-director of the Center for Human Computer Interaction Design at Northwestern University, say there is not a single aspect of many humans' lives that is not touched by near-ubiquitous internet access and adoption of connected devices. And while "near ubiquitous" is not the same as ubiquitous and there are still a great number of people without easy access to an internet connection, Gerber says for those who do, computing technology is becoming as invisible and essential as oxygen. As a result, many forget the extent to which they have set up their lives and businesses to depend on it. The internet is a powerful tool, and humans can use it in many powerful ways, including to create social change.
- North America > United States > Michigan (0.05)
- Europe > Ukraine (0.04)
- Information Technology (0.70)
- Media > News (0.48)
- Government (0.47)
Artificial intelligence added to IT Service Desk chat
Whether you're working or studying on campus or remotely, technical support is only a click away, thanks to the IT Service Desk chat feature. Available 24 hours a day, seven days a week, the chat allows you to speak directly with a live agent at any time. As of Dec. 9, the chat also will incorporate an artificial intelligence (AI) component that can quickly answer many frequently asked questions. The AI "virtual agent" functions by scanning technical articles available in Penn State's Knowledge Base and other online resources to quickly answer questions related to many technical issues, from connecting to Wi-Fi to troubleshooting issues with video conferencing. If the AI needs more context to answer a question, it will ask for additional information.
Global Big Data Conference
In 2017, Google's Counter Abuse Technology team and Jigsaw, the organization working under Google parent company Alphabet to tackle cyberbullying and disinformation, released an AI-powered API for content moderation called Perspective. Perspective's goal is to "identify toxic comments that can undermine a civil exchange of ideas," offering a score from zero to 100 on how similar new comments are to others previously identified as toxic, where toxicity is defined as how likely a comment is to make someone leave a conversation. Jigsaw claims its AI can immediately generate an assessment of a phrase's toxicity more accurately than any keyword blacklist and faster than any human moderator. But studies show that technologies similar to Jigsaw's still struggle to overcome major challenges, including biases against specific subsets of users. For example, a team at Penn State recently found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models.
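For readers curious what a Perspective query looks like, the sketch below builds the JSON request body for the API's `comments:analyze` endpoint. The example comment text and the `doNotStore` choice are illustrative assumptions; the service returns a `summaryScore` probability between 0 and 1, which is often presented as the zero-to-100 percentage described above.

```python
import json

# Hedged sketch of a request body for Perspective's comments:analyze
# endpoint (commentanalyzer.googleapis.com). TOXICITY is one of the
# API's documented attributes; the text here is just an example.
def build_analyze_request(text, attributes=("TOXICITY",)):
    return {
        "comment": {"text": text},
        "requestedAttributes": {attr: {} for attr in attributes},
        "doNotStore": True,  # ask the service not to retain the comment
    }

payload = json.dumps(build_analyze_request("You are a wonderful person."))
```

Sending this body (with an API key) via an HTTP POST would return per-attribute scores; the payload builder is shown on its own so it can be inspected without network access.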
- Information Technology > Artificial Intelligence > Natural Language (0.86)
- Information Technology > Communications > Social Media (0.57)
- Information Technology > Data Science > Data Mining > Big Data (0.40)
AI behind deepfakes may power materials design innovations
The person staring back from the computer screen may not actually exist, thanks to artificial intelligence (AI) capable of generating convincing but ultimately fake images of human faces. Now this same technology may power the next wave of innovations in materials design, according to Penn State scientists. "We hear a lot about deepfakes in the news today, AI that can generate realistic images of human faces that don't correspond to real people," said Wesley Reinhart, assistant professor of materials science and engineering and Institute for Computational and Data Sciences faculty co-hire at Penn State. "That's exactly the same technology we used in our research."

The scientists trained a generative adversarial network (GAN) to create novel refractory high-entropy alloys, materials that can withstand ultra-high temperatures while maintaining their strength and that are used in technology from turbine blades to rockets. "There are a lot of rules about what makes an image of a human face or what makes an alloy, and it would be really difficult for you to know what all those rules are or to write them down by hand," Reinhart said. "The whole principle of this GAN is you have two neural networks that basically compete in order to learn what those rules are, and then generate examples that follow the rules."

The team combed through hundreds of published examples of alloys to create a training dataset. The network features a generator that creates new compositions and a critic that tries to discern whether they look realistic compared to the training dataset. If the generator is successful, it is able to make alloys that the critic believes are real, and as this adversarial game continues over many iterations, the model improves, the scientists said. After this training, the scientists asked the model to focus on creating alloy compositions with specific properties that would be ideal for use in turbine blades.
"Our preliminary results show that generative models can learn complex relationships in order to generate novelty on demand," said Zi-Kui Liu, Dorothy Pate Enright Professor of Materials Science and Engineering at Penn State. "It's really what we are missing in our computational community in materials science in general."
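Reinhart's description of the generator-critic game can be sketched in a few lines. The toy below is a hedged illustration in Python with NumPy, using 1-D numbers in place of alloy compositions: a linear "generator" tries to make its samples look like draws from the "real" distribution, while a logistic "critic" tries to tell real from generated. Every numeric choice (distributions, learning rate, step count) is an assumption for demonstration, not the study's actual model.

```python
import numpy as np

# Toy adversarial game: generator G(z) = a*z + b vs. critic
# D(x) = sigmoid(w*x + c). All values here are illustrative.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # critic parameters
lr = 0.02

for _ in range(300):
    real = rng.normal(4.0, 1.0, size=32)  # "real" samples (stand-in for alloy data)
    z = rng.normal(0.0, 1.0, size=32)     # noise fed to the generator
    fake = a * z + b

    # Critic step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: push D(fake) toward 1, i.e. fool the critic.
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, size=1000) + b  # generated "data"
```

As the loop iterates, the generator's output drifts toward the region the critic labels real, mirroring, in miniature, how the alloy GAN learns to propose compositions the critic accepts; the real model uses deep networks and many-dimensional composition vectors rather than these two-parameter players.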
Penn State receives $25 million to enhance medical research, human health
Expanded partnerships, access to clinical trials, and new medical and behavioral treatments and interventions reaching individuals more quickly will benefit communities in Pennsylvania and beyond thanks to the renewal of Penn State's Clinical and Translational Science Award (CTSA) funded by the National Institutes of Health (NIH). The NIH's National Center for Advancing Translational Sciences (NCATS) awarded Penn State more than $25 million to provide critical clinical and translational research infrastructure and continue building collaborations across the University's campuses and with communities around the state. NCATS' CTSA Program develops innovative solutions to improve processes for turning laboratory, clinical and community research into health knowledge, interventions and treatments. CTSA institutions partner to advance biomedical and health research and share best practices and tools. Penn State is one of 64 funded CTSA organizations nationally and is among the few that serve primarily rural communities.
- Research Report > New Finding (0.38)
- Research Report > Experimental Study (0.38)
Penn State will offer Artificial Intelligence courses online - Exoborg
Penn State is officially becoming a member of the robot community. So, what exactly does this mean? The 63rd-ranked university in the country is adding artificial intelligence (AI) to its list of programs. The AI program will be offered online through Penn State's World Campus and will be a 33-credit program. It will be the first of its kind in Penn State's history.